Chapter 6 Introduction To Remote Sensing
Our ability to see and capture images with cameras relies on detecting light in a narrow portion of the electromagnetic spectrum. However, modern technologies allow us to perceive and measure energy from objects across a much wider range of the spectrum. This capability is the basis of **remote sensing**.
The term "remote sensing" gained prominence in the early 1960s. It is defined as the collection of processes used to acquire and measure information about the properties of objects or phenomena without being in physical contact with them. This is achieved by using a device called a **sensor** that records energy waves (electromagnetic radiation) reflected or emitted by the objects under study. Thus, the core components of remote sensing involve the object surface, the energy waves carrying information from the object, and the sensor that detects and records this energy from a distance (Figure 6.1 illustrates this conceptual frame).
Unlike human eyes or basic cameras that are sensitive only to visible light, modern remote sensing devices can detect and measure energy across a broad range of the electromagnetic spectrum, including infrared, microwave, and other regions. This allows us to gather information that is invisible to the naked eye, providing unique insights into the Earth's surface and its features.
Glossary Terms:
| Term | Definition |
|---|---|
| Absorptance | The fraction of the incoming electromagnetic radiation (energy) that is absorbed by a substance or object surface. |
| Band | A specific, continuous range or interval of wavelengths within the electromagnetic spectrum (e.g., the green band, the infrared band). |
| Digital image | A map or representation of an area composed of an array of grid cells called pixels, where each pixel has a numerical value (Digital Number - DN) representing the intensity of the detected energy for that specific location and spectral band. |
| Digital Number (DN) | The numerical value assigned to a pixel in a digital image. This value represents the intensity of the electromagnetic radiation detected by the sensor from the corresponding area on the ground for a specific spectral band. |
| Digital Image Processing | The use of computers to manipulate and analyze the numerical values (DNs) in a digital image to extract information about the features or phenomena on the Earth's surface that are represented in the image. |
| Electromagnetic Radiation (EMR) | Energy that propagates through space or a medium in the form of oscillating electric and magnetic fields. It travels at the speed of light and includes visible light, infrared, ultraviolet, radio waves, etc. |
| Electromagnetic Spectrum | The complete range or continuum of all possible electromagnetic radiation, ordered by wavelength or frequency, from very short wavelength/high frequency (e.g., cosmic rays) to very long wavelength/low frequency (e.g., radio waves). |
| False Colour Composite (FCC) | An image created by assigning visible colors (red, green, blue) to different spectral bands that are not their natural colors. For example, in a standard FCC, near-infrared energy (strongly reflected by healthy vegetation) might be displayed as red, causing healthy vegetation to appear bright red. |
| Gray scale | A range of tones from black (representing minimum intensity) to white (representing maximum intensity), with intermediate shades of grey, used to display the variations in brightness of an image, particularly black and white or single-band images. |
| Image | A pictorial or visual representation of a scene (a ground area and its features), created using various methods of detecting and recording energy, which can be photographic or non-photographic (digital). |
| Scene | The specific area on the ground that is covered by a single remote sensing image or photograph. |
| Sensor | A device used in remote sensing to detect and measure electromagnetic radiation reflected or emitted from objects and convert it into a signal that can be recorded as a photographic or digital image. |
| Reflectance | The fraction of the incoming electromagnetic radiation that is reflected by a substance or object surface. |
| Spectral Band | Synonymous with 'Band', referring to a specific range of wavelengths within the electromagnetic spectrum that a sensor is designed to detect (e.g., the green spectral band from 0.5 to 0.6 μm). |
Stages In Remote Sensing
Remote sensing involves a sequence of distinct stages, from the source of energy to the final output of information about the Earth's surface (Figure 6.2 illustrates these stages).
The basic processes involved are:
Source Of Energy
Remote sensing requires a source of energy to illuminate the objects being studied. In most applications, the **sun** is the primary source of energy. This is known as **passive remote sensing** because the sensor simply detects naturally available energy (sunlight) that is reflected from the Earth's surface. However, some remote sensing systems use **active remote sensing**, where they generate their own energy (e.g., radar, lidar) and direct it towards the target, then detect the energy that is reflected back.
Transmission Of Energy From The Source To The Surface Of The Earth
Energy travels from the source (like the sun) to the Earth's surface in the form of waves at the speed of light (approximately 300,000 km per second). This form of energy propagation is called **Electromagnetic Radiation (EMR)**. EMR varies in wavelength and frequency. The entire range of these energy waves constitutes the **Electromagnetic Spectrum** (Figure 6.3).
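Although the text does not state the relation explicitly, wavelength ($\lambda$) and frequency ($\nu$) of EMR are linked through the speed of light ($c$); a quick worked example for visible green light (assumed wavelength of 0.55 μm):

$$c = \lambda \nu \quad \Rightarrow \quad \nu = \frac{c}{\lambda} \approx \frac{3 \times 10^{8}\ \text{m/s}}{0.55 \times 10^{-6}\ \text{m}} \approx 5.5 \times 10^{14}\ \text{Hz}$$

That is, shorter wavelengths correspond to higher frequencies, and vice versa.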
The spectrum includes regions like Gamma rays, X-rays, Ultraviolet (UV) rays, Visible light, Infrared (IR) radiation, Microwaves, and Radio waves. While all are forms of EMR, remote sensing typically utilizes the **visible, infrared, and microwave regions** of the spectrum, as energy in these ranges interacts with the Earth's surface and can penetrate the atmosphere effectively (or is used in active systems).
Interaction Of Energy With The Earth’s Surface
When electromagnetic radiation from the source reaches the Earth's surface, it interacts with the objects and features present there. This interaction can involve several processes:
- **Absorption:** The object absorbs a portion of the energy.
- **Transmission:** The energy passes through the object (e.g., light through water or glass).
- **Reflection:** The energy bounces off the object's surface.
- **Emission:** Objects at temperatures above absolute zero (-273°C or 0 Kelvin) also emit their own thermal energy.
Different objects and materials have unique ways of interacting with EMR. They absorb, transmit, and reflect energy differently based on their physical and chemical properties, composition, and surface characteristics (like roughness, moisture content). Moreover, the same object may interact differently with energy in different parts (spectral bands) of the electromagnetic spectrum. This unique interaction property is crucial for remote sensing, as the energy reflected or emitted by an object carries information about that object. The way an object reflects or emits energy across different wavelengths is called its **spectral signature** (Figure 6.4 shows spectral signatures of different materials).
For instance, healthy vegetation strongly reflects near-infrared energy, which is why it appears bright red in a standard False Colour Composite image. Clear water absorbs much of the visible red and near-infrared energy, appearing dark, while turbid water reflects more visible light and appears lighter (Figure 6.13a and b illustrate this).
Propagation Of Reflected/Emitted Energy Through Atmosphere
After interacting with the Earth's surface, the energy (primarily reflected solar energy or emitted thermal energy) travels back through the atmosphere towards the sensor. However, the atmosphere is not perfectly transparent. Atmospheric constituents such as water vapour, carbon dioxide, other gases, and dust particles can **absorb** or **scatter** the energy as it passes through. For example, water vapour and $\text{CO}_2$ absorb energy in the middle infrared region, while dust particles scatter blue light.
The portion of energy that is absorbed or scattered by the atmosphere does not reach the sensor. This atmospheric interaction can modify the energy's characteristics and reduce the amount of signal received by the sensor. Therefore, remote sensing systems often use specific spectral bands that are relatively transparent to the atmosphere, known as **atmospheric windows**, to minimize this effect.
Detection Of Reflected/Emitted Energy By The Sensor
The energy that successfully travels from the object surface through the atmosphere is detected and measured by the sensor. Remote sensing sensors are typically mounted on platforms like satellites or aircraft. Satellites provide a stable platform for collecting data over large areas systematically.
There are different types of satellite orbits (Figure 6.6 illustrates orbital types):
- **Near-Polar Sun-Synchronous Orbit:** Remote sensing satellites (like the Indian Remote Sensing Series - IRS) are commonly placed in these orbits. These orbits are relatively low (700-900 km altitude) and are designed so that the satellite passes over any given point on the Earth's surface at roughly the same local solar time each day or revisit cycle. This consistent illumination angle is important for comparing images taken at different times.
- **Geostationary Orbit:** Weather monitoring and telecommunication satellites (like the Indian National Satellite System - INSAT) are placed in these much higher orbits (approximately 36,000 km altitude) directly above the Equator. A geostationary satellite orbits the Earth at the same rate as the Earth rotates, so it appears to remain stationary over a fixed point on the ground. This allows for continuous monitoring of a large portion of the Earth's surface (about one-third of the globe).
Comparison between these orbit types (Box 6.1):
| Orbital Characteristics | Sun-Synchronous Satellites | Geostationary Satellites |
|---|---|---|
| Altitude | 700-900 km | ~36,000 km |
| Coverage | Global (passes over nearly all latitudes, excluding the immediate poles) | ~1/3rd of the globe (fixed-area view) |
| Orbital Period | ~100 minutes (about 14 orbits per day) | 24 hours (synchronous with the Earth's rotation) |
| Resolution | Fine (e.g., 1 m to 182 m spatial resolution) | Coarse (e.g., 1 km × 1 km spatial resolution) |
| Uses | Earth Resources Applications (mapping, monitoring land, water, vegetation) | Telecommunication and Weather Monitoring |
Sensors detect the incoming EMR and convert it into a signal that can be recorded. Traditional aerial cameras acquire **photographs** on film. Satellite-based sensors often use electronic scanning mechanisms to collect data bit-by-bit, converting the detected energy into **digital signals** that are then stored as digital data products (digital images).
Conversion Of Energy Received Into Photographic/Digital Form Of Data
The detected energy is transformed into a format suitable for storage and display. In **photographic systems**, light-sensitive film captures the energy variations directly, producing a photographic image (analog data). In **digital systems**, the detected radiations are converted into electrical signals, which are then sampled and quantized into numerical values. These numerical values are stored in a digital format, typically as an array of numbers representing the image.
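As a rough illustration of this sampling and quantisation step, here is a minimal Python sketch; the voltages and the 8-bit depth are hypothetical and do not describe any particular instrument:

```python
import numpy as np

# Hypothetical detector output voltages sampled along one scan line.
voltages = np.array([0.12, 0.45, 0.78, 0.33, 0.91])

# Quantise each sample into an 8-bit Digital Number (DN, 0-255),
# assuming the detector saturates at 1.0 volt.
saturation_voltage = 1.0
dn_values = np.round(voltages / saturation_voltage * 255).astype(np.uint8)

print(dn_values)  # [ 31 115 199  84 232]
```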
This digital data is transmitted electronically from the satellite to ground receiving stations (like the one near Hyderabad, India). The data is then processed to correct for errors and prepared for information extraction.
Extraction Of Information Contents From Data Products
Once the data product (image) is ready, the next stage is to analyze it to extract meaningful information about the Earth's surface. This can be done using two main approaches:
- **Visual Interpretation:** This involves manually examining the image and using visual clues (like tone, shape, pattern) to identify and interpret features. This method is used for both photographic and digital images (when displayed as pictures).
- **Digital Image Processing:** This involves using specialized computer hardware and software to manipulate and analyze the digital data (the numerical values of pixels) to automatically or semi-automatically extract information. This is necessary for working directly with digital images.
Conversion Of Information Into Map/Tabular Forms
The interpreted information, whether obtained visually or through digital processing, is then organized and presented in a usable format. This often involves creating thematic maps (maps showing the spatial distribution of specific themes like land use, vegetation types, water bodies). Quantitative information (like areas of different land cover types, lengths of features) can also be extracted and presented in tabular form.
Sensors
A **sensor** is the device that collects electromagnetic radiation, converts it into a signal, and presents the data in a format from which information about the target can be obtained. Sensors can be categorized based on their output format:
- **Photographic Sensors:** Cameras that use light-sensitive film to capture images (analog output).
- **Non-Photographic Sensors:** Devices that capture energy and convert it into electronic signals that are then digitized, producing digital data (digital output). These are often called **scanners**.
In modern satellite remote sensing, non-photographic sensors (scanners) are predominantly used. These sensors acquire images bit-by-bit by sweeping across the scene below.
Multispectral Scanners
**Multispectral Scanners (MSS)** are designed to collect data simultaneously in multiple spectral bands (different ranges of wavelengths). They use a scanning mechanism to build up an image of the ground area. A scanner typically consists of a mirror that directs incoming energy onto detectors. As the mirror oscillates or rotates, the sensor records data along a scan line; the series of successive scan lines builds up the full image. The width of the area covered by a single scan is called the **swath**.
The image is composed of small grid cells called **pixels** (picture elements). The size of these pixels on the ground determines the **spatial resolution** of the image (the ability to distinguish small objects).
Multispectral scanners are broadly classified into two types based on their scanning mechanism:
Whiskbroom Scanners
**Whiskbroom scanners** (Figure 6.7) use a single detector or a small array of detectors and a mechanically rotating or oscillating mirror that sweeps across the terrain perpendicular to the satellite's path. As the mirror rotates, it directs energy from different points along a scan line to the detector(s). The forward motion of the satellite provides the next scan line. The mirror's oscillation angle determines the swath width. The instantaneous field of view (IFOV) refers to the area on the ground viewed by the detector at any single moment. Whiskbroom scanners can collect data over a wide field of view (e.g., 90°-120°) in multiple narrow spectral bands. (Figure 6.7 shows the scanning path of a whiskbroom scanner).
Pushbroom Scanners
**Pushbroom scanners** (Figure 6.8) do not use a scanning mirror. Instead, they employ a linear array of detectors (like a comb or pushbroom) arranged perpendicular to the satellite's path. Each detector in the array is responsible for collecting energy from a small area on the ground (a pixel) along the scan line. As the satellite moves forward, the entire linear array collects data simultaneously along a swath. The number of detectors in the array is equal to the number of pixels across the swath width at the given spatial resolution. For example, a sensor with a 60 km swath and 20m resolution would need $60,000 \text{ m} / 20 \text{ m} = 3000$ detectors. Pushbroom scanners are generally simpler mechanically and can provide better spatial resolution and longer dwell times (time spent viewing a ground area) compared to whiskbroom scanners. (Figure 6.8 shows the concept of a pushbroom scanner).
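The detector-count arithmetic generalises directly; the short sketch below (function name and parameters are illustrative, not from the text) reproduces the worked example:

```python
def pushbroom_detectors(swath_km: float, resolution_m: float) -> int:
    """Number of detectors needed across the swath of a pushbroom scanner."""
    return int(swath_km * 1000 / resolution_m)

print(pushbroom_detectors(60, 20))  # 3000, as in the example above
print(pushbroom_detectors(60, 10))  # 6000: finer resolution needs more detectors
```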
Resolving Powers Of The Satellites
The capability of remote sensing satellites to capture information is often described by different types of "resolving powers" or resolutions. One important aspect is related to the time dimension:
**Temporal Resolution:** This refers to the frequency at which a satellite can revisit and acquire images of the same area on the Earth's surface. Satellites in sun-synchronous orbits have predictable revisit times. Higher temporal resolution means the satellite passes over the same spot more frequently, which is important for monitoring dynamic phenomena such as vegetation growth cycles, urban expansion, or disaster impacts. Figure 6.9 shows two satellite images of the same Himalayan area acquired in May and November; the seasonal differences in their False Colour Composite appearance reveal changes in vegetation (e.g., coniferous vs. deciduous trees, cropped land) over time. Figure 6.10 (a) and (b) provide another example, comparing satellite images of Banda Aceh, Indonesia, before (June 2004) and after (December 2004) the Indian Ocean Tsunami to show the extent of the devastation.
Sensor Resolutions
Beyond temporal resolution, the quality and detail of data captured by remote sensors are described by other types of resolution: spatial, spectral, and radiometric. These characteristics determine the sensor's ability to distinguish features on the ground and differentiate between different types of features.
Spatial Resolution
**Spatial resolution** refers to the sensor's ability to distinguish between two closely spaced objects on the ground as separate entities. It is commonly expressed as the size of a single pixel on the ground (e.g., a sensor with 10-meter spatial resolution means each pixel represents a 10m x 10m area on the ground). Higher spatial resolution (smaller pixel size) allows for the identification of smaller features and more detail in the image, much like having better eyesight or using spectacles to see fine print. Table 6.1 shows spatial resolutions of various satellites, ranging from 80m for older systems to 1m for newer high-resolution systems.
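To make pixel size concrete, the small illustrative calculation below (not part of the text) estimates how many pixels of different resolutions fall within one square kilometre:

```python
# Approximate pixel counts covering a 1 km x 1 km area
# for a few spatial resolutions listed in Table 6.1.
for resolution_m in (80, 30, 20, 5.8):
    pixels_per_side = 1000 / resolution_m
    print(f"{resolution_m} m pixels -> ~{pixels_per_side ** 2:,.0f} pixels per sq. km")
```

Finer resolution therefore means many more pixels (and far more data) for the same ground area.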
Spectral Resolution
**Spectral resolution** refers to the sensor's capability to detect and record energy in specific, narrow bands of the electromagnetic spectrum. A multispectral sensor collects data in several different spectral bands (e.g., visible green, visible red, near-infrared). Higher spectral resolution means the sensor is designed to detect energy in more numerous or narrower spectral bands. This is important because different objects have unique spectral signatures (reflect or emit energy differently across wavelengths). Capturing data in multiple bands allows us to better distinguish between different types of features that might look similar in a single band (like differentiating between different types of vegetation or between healthy and unhealthy plants). This principle is related to the dispersion of light, where white light is separated into different colors based on wavelength (like in a rainbow or using a prism - Box 6.2).
Dispersion of Light (Box 6.2): When white light passes through a prism (or through raindrops, as in a rainbow), it is split into its component colours because each wavelength is bent by a different amount. Multispectral sensing rests on the same idea: incoming energy is separated into distinct wavelength bands and each band is recorded independently.
The ability of objects to interact differently with energy in various spectral regions (spectral signature) is fundamental to multispectral remote sensing. For example, healthy vegetation strongly reflects in the near-infrared band, making it appear very bright in infrared images or red in standard False Colour Composites (FCC). Clear water strongly absorbs in infrared bands, appearing dark, while turbid water reflects more in visible bands and appears lighter (Figure 6.11 illustrates spectral responses in different bands).
Colour Signatures on Standard False Colour Composite (Table 6.2):
| S. No. | Earth Surface Feature | Colour (In Standard FCC) |
|---|---|---|
| 1. | Healthy Vegetation and Cultivated Areas | |
| | Evergreen | Red to magenta |
| | Deciduous | Brown to red |
| | Scrubs | Light brown with red patches |
| | Cropped land | Bright red |
| | Fallow land | Light blue to white |
| 2. | Waterbody | |
| | Clear water | Dark blue to black |
| | Turbid waterbody | Light blue |
| 3. | Built-up area | |
| | High density | Dark blue to bluish green |
| | Low density | Light blue |
| 4. | Waste lands/Rock outcrops | |
| | Rock outcrops | Light brown |
| | Sandy deserts/River sand/Salt affected | Light blue to white |
| | Deep ravines | Dark green |
| | Shallow ravines | Light green |
| | Waterlogged/Wetlands | Mottled black |
Radiometric Resolution
**Radiometric resolution** refers to the sensor's ability to distinguish between very slight differences in the intensity of the energy it detects. This is related to the number of discrete brightness levels or Digital Numbers (DN values) that the sensor can record for each pixel in a given spectral band. Higher radiometric resolution means the sensor can detect smaller variations in radiance (the energy reflected or emitted from the target), allowing for finer distinctions between different types of materials or conditions on the ground. For example, a sensor with 8-bit radiometric resolution can record 256 distinct brightness levels (0-255), while a sensor with 6-bit resolution records only 64 levels (0-63). Higher radiometric resolution generally results in images with more subtle tonal variations.
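Because the number of recordable levels is simply a power of two, the relationship can be checked with a one-loop sketch:

```python
# Distinct brightness levels (DN values) available at each bit depth.
for bits in (6, 7, 8):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels (DN 0 to {levels - 1})")
```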
Table 6.1 provides a comparison of the spatial, spectral (number of bands), and radiometric resolutions of sensors on selected satellites like Landsat, IRS, and SPOT.
| Satellite/Sensor | Spatial Resolution (in metres) | Number of Bands | Radiometric Range (Number of Grey Level Variations) |
|---|---|---|---|
| Landsat MSS (USA) | 80.0 $\times$ 80.0 | 4 | 0 - 63 (6-bit) |
| IRS LISS – I (India) | 72.5 $\times$ 72.5 | 4 | 0 - 127 (7-bit) |
| IRS LISS – II (India) | 36.25 $\times$ 36.25 | 4 | 0 - 127 (7-bit) |
| Landsat TM (USA) | 30.00 $\times$ 30.00 | 7 | 0 - 255 (8-bit) |
| IRS LISS III (India) | 23.00 $\times$ 23.00 | 4 | 0 - 127 (7-bit) |
| SPOT HRV - I (France) | 20.00 $\times$ 20.00 | 4 | 0 - 255 (8-bit) |
| SPOT HRV – II (France) | 10.00 $\times$ 10.00 | 1 (Panchromatic) | 0 - 255 (8-bit) |
| IRS PAN (India) | 5.80 $\times$ 5.80 | 1 (Panchromatic) | 0 - 127 (7-bit) |
Note: For a sensor with n-bit radiometric resolution, DN values range from 0 to $2^n - 1$, giving $2^n$ distinct grey levels. For example, 6-bit resolution yields $2^6 = 64$ levels (0-63), 7-bit yields $2^7 = 128$ levels (0-127), and 8-bit yields $2^8 = 256$ levels (0-255).
Data Products
The raw output from remote sensors, representing the recorded electromagnetic energy, is referred to as remote sensing **data products**. These products can be in different formats depending on the sensor type and recording mechanism.
The two main types of remote sensing data products are:
- **Photographic Images:** Produced by photographic sensors (cameras) that record energy onto film.
- **Digital Images:** Produced by scanning sensors that convert detected energy into electronic signals and then digitize them into numerical values.
It's important to note the distinction between the terms **image** and **photograph**. An **image** is a general term for any pictorial representation, regardless of the part of the electromagnetic spectrum used or the recording method. A **photograph** specifically refers to an image recorded on light-sensitive photographic film, typically within the optical (visible and near-infrared) region of the spectrum. Thus, all photographs are images, but not all images are photographs (e.g., a radar image is an image but not a photograph).
Photographic Images
**Photographic images**, or aerial photographs (when taken from aircraft), are typically acquired in the optical region (0.3-0.9 μm) using cameras with different types of film emulsions. Common film types include black and white (sensitive to visible light intensity), color (sensitive to visible red, green, blue), black and white infrared (sensitive to visible light and near-infrared), and color infrared (sensitive to green, red, and near-infrared, displayed in false colors). Black and white film is frequently used in aerial photography. Photographic images can often be enlarged to some extent without losing significant detail, although excessive enlargement can reveal graininess.
Digital Images
**Digital images**, acquired by scanning sensors, are composed of a grid of discrete picture elements called **pixels**. Each pixel represents a specific small area on the ground. A digital image is essentially a numerical representation, where each pixel has an assigned numerical value known as a **Digital Number (DN)**. This DN value represents the intensity of the electromagnetic radiation detected by the sensor from that ground area within a particular spectral band. The range of possible DN values depends on the radiometric resolution of the sensor (e.g., 0-255 for 8-bit resolution).
The level of detail visible in a digital image is directly related to the pixel size (spatial resolution). Smaller pixels capture more detail. However, like photographic images, zooming into a digital image beyond its original resolution will cause a loss of fine detail and result in the pixelated appearance of the image, where individual pixels become visible (Figure 6.12 illustrates pixels and DN values). (Figure 6.12 shows a digital image, a zoomed-in portion showing individual pixels, and a table listing the Digital Number values for those pixels). Digital image processing techniques involve working with these DN values numerically.
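Since a digital image is just a grid of numbers, it can be handled directly as an array; a minimal sketch with hypothetical 8-bit DN values:

```python
import numpy as np

# A tiny single-band digital image: each element is one pixel's Digital Number.
image = np.array([
    [ 12,  15,  18, 200],
    [ 14,  16, 180, 210],
    [ 13, 160, 190, 205],
], dtype=np.uint8)

print(image.shape)               # (3, 4): 3 rows x 4 columns of pixels
print(image[0, 3])               # DN of the pixel at row 0, column 3 -> 200
print(image.min(), image.max())  # darkest and brightest DNs in the scene
```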
Digital images can be displayed visually by converting the DN values into corresponding gray tones (for single-band images) or colors (by combining data from multiple bands to create color or false-color composite images).
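The standard FCC assignment mentioned earlier (near-infrared displayed as red, red as green, green as blue) amounts to stacking three bands into the display channels; in this sketch the band arrays are random placeholders rather than real data:

```python
import numpy as np

# Hypothetical co-registered single-band images of the same scene.
rng = np.random.default_rng(0)
green_band = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
red_band   = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
nir_band   = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)

# Standard False Colour Composite: NIR -> red channel, red -> green, green -> blue.
# Healthy vegetation (high NIR reflectance) therefore appears bright red.
fcc = np.dstack([nir_band, red_band, green_band])

print(fcc.shape)  # (100, 100, 3): an RGB array ready for display
```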
Interpretation Of Satellite Imageries
Once remote sensing data products (images) are acquired and processed, the next critical step is to **interpret** them to extract useful information about the Earth's surface features, their forms, patterns, and changes. Information extraction can be done either manually through visual interpretation or computationally through digital image processing.
**Visual interpretation** is a manual process that involves examining the image visually to identify and interpret objects and phenomena based on their appearance. This method relies on human perception and understanding of geographical features. **Digital image processing** uses computer algorithms to analyze the numerical values of pixels in a digital image to automatically or semi-automatically classify features, detect changes, or perform other analyses. While digital processing is powerful, visual interpretation remains important, especially for complex scenes or qualitative analysis.
Due to the complexity of digital image processing techniques (requiring specialized software and hardware), this discussion will focus on the **elements of visual interpretation**.
Elements Of Visual Interpretation
When visually interpreting remote sensing images, we use characteristic properties of objects as they appear in the image, similar to how we identify objects in everyday life. These characteristics are called the elements of visual interpretation. They can be grouped into image characteristics (related to how the object appears in the image) and terrain characteristics (related to the object's geographical context).
Image Characteristics:
Tone Or Colour
**Tone** refers to the shades of gray in a black and white image, while **colour** refers to the hues and intensity of colors in a color image (either true color or false color). These are determined by the amount of electromagnetic energy reflected (or emitted) by an object and recorded by the sensor in a specific spectral band. Different objects reflect energy differently based on their surface properties (roughness, moisture, composition) and the angle of illumination. Smooth, dry surfaces generally reflect more energy than rough, moist surfaces. Furthermore, the object's reflectance varies significantly across the spectrum (spectral signature). By analyzing tone or color in different bands or color composites, we can often distinguish between different types of features. For example, healthy vegetation, which strongly reflects near-infrared energy, appears in distinct tones or colors (like bright red in a standard False Colour Composite) that help differentiate it from other surfaces. Water bodies absorb most near-infrared energy and appear dark (Figure 6.13a and b).
Texture
**Texture** refers to the visual roughness or smoothness of an area in an image, resulting from the frequency and arrangement of tonal or color variations within it. It arises from the aggregation of features that are too small or too numerous to be individually distinguished at the image's resolution. Fine textures show little tonal variation over short distances (appearing smooth), while coarse textures show rapid and large tonal variations (appearing rough). Texture helps differentiate between areas with similar average tone or color but different spatial arrangements of features, such as different types of residential areas (dense housing vs. scattered houses), different crops, or different vegetation types. Figure 6.14 illustrates the contrast between coarse texture (e.g., rough terrain or mangroves) and fine texture (e.g., cropped land or a smooth surface).
Size
**Size** refers to the dimensions (length, width, area) of an object as represented in the image, determined by the image's resolution or scale. The apparent size of an object is a crucial clue for its identification. For example, the large size and distinctive layout of an industrial complex can distinguish it from smaller residential buildings (Figure 6.15). Similarly, the size helps in identifying settlements of different hierarchy (villages vs. towns vs. cities) or differentiating between large recreational facilities (stadium, race course) and smaller features. (Figure 6.15 shows variations in size and layout between institutional buildings and residential areas).
Shape
**Shape** refers to the overall form, outline, or configuration of an object. The shape of an object is often one of the most distinctive visual cues for its identification in a remote sensing image. Some objects have very unique shapes that make them easily recognizable, such as the shape of a specific famous building. Linear features also have characteristic shapes; for example, a railway line typically appears as a long, continuous, relatively straight line with gradual curves, distinct from the more irregular or angular bends often found in roads (Figure 6.16). Shape also helps in identifying different types of land uses or facilities. (Figure 6.16 shows the distinctive shape differences between a railway track (curvilinear) and a road (sharper bends)).
Shadow
**Shadow** is formed when an object obstructs the illumination source (usually the sun), creating a dark area behind it. The shape and length of the shadow depend on the object's height and the angle of the light source. Shadows can sometimes help in identifying objects with distinctive shapes (such as a tall building, a minaret, or an overhead water tank). However, they can also obscure features located within the shadowed area, making interpretation difficult. Because of the high altitude and sun-synchronous orbits of satellites, shadows are usually less prominent in satellite images than in large-scale aerial photographs, where the lower altitude and perspective produce more pronounced shadows; consequently, shadow is used less as a primary interpretation element in satellite imagery, except where tall structures cast shadows that hide ground features.
Pattern
**Pattern** refers to the spatial arrangement of multiple objects or features that exhibit a recognizable order or repetition. Features that occur in a characteristic arrangement or layout can be identified by recognizing their pattern. For example, planned residential areas often consist of houses of similar size and layout arranged in a regular grid or pattern (Figure 6.17). Orchards and plantations show characteristic patterns of uniformly spaced trees. Different drainage patterns (dendritic, trellis, etc.) or settlement patterns (linear, scattered) can also be identified by observing the arrangement of rivers or buildings. (Figure 6.17 shows a planned residential area identifiable by the regular arrangement of houses and roads).
Association
**Association** refers to the geographical relationship between an object and other surrounding objects or features. Objects often occur in relation to specific environmental settings or human-made facilities. Identifying features based on their common association with other features helps in confirming their identity. For example, an educational institution is often associated with its location in or near a residential area and the presence of playgrounds or sports fields. Industrial areas are commonly found along highways on the outskirts of cities. Slums may be associated with locations along railway lines or drainage channels. Recognizing these typical associations provides strong clues for interpretation.
Additional online resources for further information on remote sensing:
- www.isro.gov.in (Indian Space Research Organisation)
- www.nrsc.gov.in (National Remote Sensing Centre)
- www.iirs.gov.in (Indian Institute of Remote Sensing)